# Downstream task fine-tuning

- **Skin Disease Classifier** · muhammadnoman76 · Large Language Model, Transformers · 28 downloads · 0 likes
  A transformers model hosted on the Hugging Face Hub; its specific functions and intended uses are not documented.
- **Log Anomaly Detection Model New** · Dumi2025 · Large Language Model, Transformers · 652 downloads · 1 like
  A transformers model hosted on the Hugging Face Hub; its specific functions and intended uses are not documented.
- **Deepfake Detector Faceforensics** · HrutikAdsare · Large Language Model, Transformers · 57 downloads · 1 like
  A transformers model hosted on the Hugging Face Hub; its specific functions and intended uses are not documented.
- **Qwen2.5 Coder 7B Instruct Bd** · nomic-ai · Large Language Model, Transformers · 44 downloads · 1 like
  A transformers model hosted on the Hugging Face Hub; its specific functions and intended uses are not documented.
- **Mbert Lstm Sentiment Analysis** · simoneprete · Large Language Model, Transformers · 691 downloads · 1 like
  A transformers model hosted on the Hugging Face Hub; its specific functions and intended uses are not documented.
- **Qwen2.5 1.5B Apeach** · jason9693 · Large Language Model, Transformers · 49.16k downloads · 3 likes
  A transformers model hosted on the Hugging Face Hub; its specific functions and intended uses are not documented.
- **Phien Table Structure Recognition 143** · trungphien · Large Language Model, Transformers · 27 downloads · 0 likes
  A transformers model hosted on the Hugging Face Hub; its specific functions and intended uses are not documented.
- **Prometheus Bgb 8x7b V2.0** · prometheus-eval · Large Language Model, Transformers · 772 downloads · 6 likes
  A transformers model hosted on the Hugging Face Hub; its specific functions and intended uses are not documented.
- **Bert Cefr Model2** · kalobiralo · Large Language Model, Transformers · 1,804 downloads · 4 likes
  A transformers model hosted on the Hugging Face Hub; its specific functions and intended uses are not documented.
- **Tinymistral 248M GGUF** · afrideva · Apache-2.0 · Large Language Model, English · 211 downloads · 5 likes
  TinyMistral-248M is a small language model derived from Mistral 7B, scaled down to roughly 248 million parameters and intended primarily as a base for fine-tuning on downstream tasks.
- **Roberta Base Serbian** · KoichiYasuoka · Large Language Model, Transformers, Other · 20 downloads · 1 like
  A Serbian RoBERTa model (Cyrillic and Latin scripts) pretrained on srWaC, suitable for fine-tuning on downstream tasks.
- **Roberta Small Belarusian** · KoichiYasuoka · Large Language Model, Transformers, Other · 234 downloads · 5 likes
  A RoBERTa model pretrained on the CC-100 dataset, suitable for Belarusian text-processing tasks.
- **Albert Small V2** · nreimers · Large Language Model, Transformers · 62 downloads · 0 likes
  ALBERT Small v2 is a lightweight six-layer variant of ALBERT-base-v2, built on the Transformer architecture and suitable for general natural language processing tasks.
- **Albert Xxlarge V2** · albert · Apache-2.0 · Large Language Model, English · 19.79k downloads · 20 likes
  ALBERT XXLarge v2 is a large language model pretrained with a masked language modeling objective; it uses a parameter-shared Transformer architecture with 12 repeated layers and 223 million parameters.
- **Roberta Base Thai Char** · KoichiYasuoka · Apache-2.0 · Large Language Model, Transformers, Other · 23 downloads · 0 likes
  A RoBERTa model pretrained on Thai Wikipedia text, using character-level embeddings so that it works with BertTokenizerFast.
- **Roberta Small Japanese Aozora** · KoichiYasuoka · Large Language Model, Transformers, Japanese · 19 downloads · 0 likes
  A small Japanese RoBERTa model pretrained on Aozora Bunko texts, suitable for a range of downstream NLP tasks.
- **Roberta Small Japanese Aozora Char** · KoichiYasuoka · Large Language Model, Transformers, Japanese · 26 downloads · 1 like
  A RoBERTa model pretrained on Aozora Bunko texts with a character tokenizer, suitable for Japanese text-processing tasks.
- **Gpt Neo 125m** · EleutherAI · MIT · Large Language Model, English · 150.96k downloads · 204 likes
  GPT-Neo 125M is a Transformer model developed by EleutherAI and modeled on the GPT-3 architecture, with 125 million parameters; it is used primarily for English text generation.